fix: Guard -1 sentinel in manual process endpoint (#160) #163
deucebucket merged 2 commits into develop
Conversation
Vibe-check caught that the -1 rate-limit sentinel from `process_queue` leaks into the `/api/process` endpoint, causing `{processed: -1, verified: -1}` in the API response. Guard with `max(0, l2_processed)`. Also update the `process_queue` docstring to document the -1 return value.
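A minimal sketch of the guard described above, with hypothetical function bodies (the real `process_queue` and endpoint logic live in app.py and are not shown in this PR page):

```python
def process_queue():
    """Process one layer-2 batch.

    Returns:
        (processed, fixed): counts for this batch. On rate limit,
        returns (-1, 0) -- the -1 is a sentinel, not a real count.
    """
    # Hypothetical stand-in: simulate a rate-limited call.
    return -1, 0


def api_process():
    """Simplified stand-in for the manual /api/process handler."""
    total_processed = 0
    total_fixed = 0
    l2_processed, l2_fixed = process_queue()
    # Issue #160: clamp the -1 rate-limit sentinel so the API
    # response never reports a negative processed count.
    total_processed += max(0, l2_processed)
    total_fixed += l2_fixed  # second element is always 0 on rate limit
    verified = total_processed - total_fixed
    return {"processed": total_processed, "verified": verified}
```

With the clamp in place, a rate-limited call yields `{"processed": 0, "verified": 0}` instead of negative counts.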
🔍 Vibe Check Review
Context
This PR adds a max(0, l2_processed) guard in api_process (the manual /api/process endpoint) to prevent the -1 rate-limit sentinel introduced by PR #162 from corrupting total_processed, plus a docstring update documenting that contract.
Codebase Patterns I Verified
- Sentinel contract established in PR #162: `process_queue` was changed from `return 0, 0` to `return -1, 0` on rate limit (SHA bae9c70 → 7b86ec3). PR #163 is built on top of that change.
- worker.py already guarded (PR #162): the layer-4 loop in `worker.py:387–393` already handles `processed == -1` with a 30s sleep + continue — that path is covered.
- Only one other call site in app.py: `process_queue` is called at `app.py:7684` (the one being fixed). No other unguarded callers in app.py. Confirmed via grep.
- `l2_fixed` is always 0 on rate limit: `process_queue` returns `(-1, 0)` — the second element is always 0 — so `total_fixed += l2_fixed` is safe without a guard.
- Downstream `verified = processed - fixed` at `app.py:7697` would have produced `-1` if rate-limited without this fix. Now it stays at 0.
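The worker-side guard the review refers to can be sketched like this — hypothetical structure, assuming only what the review states (a loop that sleeps 30s and retries on the -1 sentinel):

```python
import time

RATE_LIMIT_SLEEP = 30  # seconds, per the "30s sleep + continue" guard


def run_layer4_loop(process_queue, max_iterations=3, sleep=time.sleep):
    """Sketch of the worker loop: skip rate-limited batches safely."""
    total = 0
    for _ in range(max_iterations):
        processed, fixed = process_queue()
        if processed == -1:
            # Rate limited: back off and retry; never add -1 to totals.
            sleep(RATE_LIMIT_SLEEP)
            continue
        total += processed
    return total
```

Injecting `sleep` makes the backoff behavior testable without real waiting.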
✅ Good
- Minimal, surgical fix — exactly the right scope for a follow-on sentinel guard.
- `max(0, l2_processed)` is the correct idiom: no special-case branching, it works for any negative sentinel value, and it doesn't mask future bugs (a -2 would also be clamped rather than corrupting the total).
- The docstring update accurately documents the contract, which is the right place to record it given the function signature doesn't change.
- The `# Issue #160:` comment creates a traceable link to the root bug.
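The point about clamping any negative sentinel without branching is easy to demonstrate (the -2 here is hypothetical, used only to illustrate the claim):

```python
# max(0, x) clamps every negative sentinel, not just -1, and passes
# real non-negative counts through unchanged.
samples = [-1, -2, 0, 7]
clamped = [max(0, s) for s in samples]
```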
🚨 Issues Found
No issues found.
📋 Scope Verification
| Issue | Problem | Addressed? | Notes |
|---|---|---|---|
| #160 | Rate-limited batches falsely trigger exhaustion | ✅ | Core fix was PR #162; this PR closes the remaining gap in the manual endpoint call path |
| #162 (merged) | Sentinel introduced by this fix needed guarding in api_process | ✅ | Exactly what this PR does |
Scope Status: SCOPE_OK
📝 Documentation Check
This is a `fix:` PR. No CHANGELOG entry or version bump is included. Given this is a pure defensive guard with zero user-visible behavior change (rate limiting was already handled in the worker; the manual endpoint just returned a slightly wrong count that nobody would catch), skipping a version bump is defensible. If the project pattern is "every fix gets a bump," that's LOW and doesn't block merge.
- CHANGELOG.md: ⚠️ Not updated (minor, acceptable for a follow-on patch to an already-documented fix)
- README.md: N/A
🎯 Verdict
APPROVE
Clean, focused, correct. The max(0, l2_processed) guard is the right fix for the right call site. No regressions, no scope creep, no security or correctness issues.
Summary
Addresses vibe-check review findings on PR #162. The `-1` rate-limit sentinel from `process_queue()` leaks into the `/api/process` endpoint (manual "Process Batch" button), causing `{processed: -1, verified: -1}` in the API response when the user is rate-limited.

- `total_processed += max(0, l2_processed)` in `app.py:7782`
- Update the `process_queue` docstring to document the `-1` return value

Test plan
- Rate-limited manual batch now reports `processed: 0`, not `-1`
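The test plan can be expressed as a small unit test — a sketch with hypothetical names, factoring the handler so the rate-limited `process_queue` can be stubbed:

```python
def api_process(process_queue):
    """Simplified stand-in for the manual /api/process handler,
    taking process_queue as a parameter for testability."""
    l2_processed, l2_fixed = process_queue()
    total_processed = max(0, l2_processed)  # Issue #160 guard
    verified = total_processed - l2_fixed
    return {"processed": total_processed, "verified": verified}


def test_rate_limited_batch_reports_zero():
    # On rate limit, process_queue returns the (-1, 0) sentinel;
    # the endpoint must report 0, not -1.
    response = api_process(lambda: (-1, 0))
    assert response == {"processed": 0, "verified": 0}
```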